220 research outputs found

    EENED: End-to-End Neural Epilepsy Detection based on Convolutional Transformer

    Recently, Transformer- and convolutional neural network (CNN)-based models have shown promising results in EEG signal processing. Transformer models can capture the global dependencies in EEG signals through a self-attention mechanism, while CNN models can capture local features such as sawtooth waves. In this work, we propose an end-to-end neural epilepsy detection model, EENED, that combines CNN and Transformer. Specifically, by introducing a convolution module into the Transformer encoder, EENED can learn the time-dependent relationships among the patient's EEG signal features and notice local EEG abnormalities closely related to epilepsy, such as the appearance of spikes and sharp and slow waves. Our proposed framework combines the abilities of the Transformer and CNN to capture features of EEG signals at different scales and holds promise for improving the accuracy and reliability of epilepsy detection. Our source code will be released soon on GitHub. Comment: Accepted by IEEE CAI 202
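The convolution-inside-Transformer idea described above can be sketched minimally. This is an illustrative toy (single-head attention without learned projections, a fixed depthwise kernel), not the EENED architecture itself:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # single-head attention over time; projection weights omitted for brevity
    scores = x @ x.T / np.sqrt(x.shape[-1])   # global pairwise dependencies
    return softmax(scores) @ x

def depthwise_conv1d(x, kernel):
    # per-channel 'same' convolution: picks up local waveform shapes
    k, pad = kernel.shape[0], kernel.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([(xp[t:t + k] * kernel).sum(axis=0) for t in range(x.shape[0])])

def conv_transformer_block(x, kernel):
    x = x + self_attention(x)             # long-range temporal structure
    x = x + depthwise_conv1d(x, kernel)   # local transients (spikes, sharp waves)
    return x
```

Both residual branches operate on the same (time, channels) array, which is the intuition behind combining the two feature scales in one encoder block.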

    Dynamic Bayesian Network-Based Escape Probability Estimation for Coach Fire Accidents

    Coach emergency escape research is an effective measure to reduce casualties in serious vehicle fire accidents. A novel experimental method employing a wireless transducer was implemented, and the head rotation speed, rotation moment, and rotation duration were collected as the input variables for a classification and regression tree (CART) model. Based on this model, the classification result explicitly showed that exit-searching efficiency evolved over time. After discarding the three least important factors identified by the Analytic Hierarchy Process (AHP), the final Dynamic Bayesian Network (DBN) was built from the temporal part of the CART output and the time-independent part of the vehicle characteristics. Simulation showed that the most efficient exit-searching period is the middle escape stage, which is 10 seconds after the emergency signal is triggered, and that the escape probability clearly increases with efficient exit searching. Furthermore, receiving emergency escape training yields a significant escape-probability improvement of more than 10%. Comparing different failure modes, the emergency hammer layout and door reliability have a more significant influence on escape probability than the aisle condition. Based on the simulation results, the escape probability drops significantly, to below 0.55, if the emergency hammers, door, and aisle are all in a failure state.
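As an illustration of the DBN-style temporal reasoning described above, here is a toy two-state forward-filtering sketch. The stage transition probabilities are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# two-state toy DBN slice: state 0 = still searching, state 1 = escaped;
# per-stage transition matrices (rows: from, cols: to) are hypothetical
stages = {
    "early":  np.array([[0.95, 0.05], [0.0, 1.0]]),
    "middle": np.array([[0.80, 0.20], [0.0, 1.0]]),  # most efficient searching
    "late":   np.array([[0.92, 0.08], [0.0, 1.0]]),
}

def escape_probability(order):
    belief = np.array([1.0, 0.0])   # everyone starts out searching
    for stage in order:             # forward filtering through time slices
        belief = belief @ stages[stage]
    return belief[1]

p = escape_probability(["early", "middle", "late"])
```

The same filtering loop extends to richer state spaces (e.g. conditioning on door or hammer failure) by enlarging the transition matrices.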

    Advancing Radiograph Representation Learning with Masked Record Modeling

    Modern studies in radiograph representation learning rely on either self-supervision to encode invariant semantics or associated radiology reports to incorporate medical expertise, while the complementarity between them is barely noticed. To explore this, we formulate the self- and report-completion as two complementary objectives and present a unified framework based on masked record modeling (MRM). In practice, MRM reconstructs masked image patches and masked report tokens following a multi-task scheme to learn knowledge-enhanced semantic representations. With MRM pre-training, we obtain pre-trained models that can be well transferred to various radiography tasks. Specifically, we find that MRM offers superior performance in label-efficient fine-tuning. For instance, MRM achieves 88.5% mean AUC on CheXpert using 1% labeled data, outperforming previous R²L methods with 100% labels. On NIH ChestX-ray, MRM outperforms the best-performing counterpart by about 3% under small labeling ratios. Besides, MRM surpasses self- and report-supervised pre-training in identifying the pneumonia type and the pneumothorax area, sometimes by large margins. Comment: Camera ready at ICLR 2023. Code and models are available at https://github.com/RL4M/MRM-pytorc
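The masked-record recipe (mask image patches and report tokens, reconstruct both, sum the losses) can be caricatured as follows. The mean-fill "decoder" and the array shapes are placeholders for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(x, mask_ratio, rng):
    # randomly mask entries, 'reconstruct' them, and score only the
    # masked positions; mean-fill stands in for a learned decoder
    mask = rng.random(x.shape) < mask_ratio
    corrupted = np.where(mask, 0.0, x)
    recon = np.where(mask, corrupted.mean(), corrupted)
    return ((recon - x) ** 2 * mask).sum() / max(mask.sum(), 1)

patches = rng.normal(size=(196, 64))   # stand-in image patch embeddings
tokens = rng.normal(size=(128, 32))    # stand-in report token embeddings
loss = (masked_reconstruction_loss(patches, 0.75, rng)
        + masked_reconstruction_loss(tokens, 0.50, rng))  # multi-task sum
```

The key structural point is the single summed objective over two masked modalities; in the actual method each branch has its own encoder/decoder.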

    Interpretable and Robust AI in EEG Systems: A Survey

    The close coupling of artificial intelligence (AI) and electroencephalography (EEG) has substantially advanced human-computer interaction (HCI) technologies in the AI era. Different from traditional EEG systems, the interpretability and robustness of AI-based EEG systems are becoming particularly crucial. Interpretability clarifies the inner working mechanisms of AI models and thus can gain the trust of users. Robustness reflects the AI's reliability against attacks and perturbations, which is essential for sensitive and fragile EEG signals. Thus, the interpretability and robustness of AI in EEG systems have attracted increasing attention, and research on them has made great progress recently. However, there is still no survey covering recent advances in this field. In this paper, we present the first comprehensive survey and summarize the interpretable and robust AI techniques for EEG systems. Specifically, we first propose a taxonomy of interpretability by characterizing it into three types: backpropagation, perturbation, and inherently interpretable methods. Then we classify the robustness mechanisms into four classes: noise and artifacts, human variability, data acquisition instability, and adversarial attacks. Finally, we identify several critical and unresolved challenges for interpretable and robust AI in EEG systems and further discuss their future directions.
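As a concrete instance of the perturbation category of interpretability methods mentioned above, here is a minimal perturbation-attribution sketch; the linear "model" and its weights are hypothetical stand-ins:

```python
import numpy as np

def perturbation_saliency(model, x, eps=1e-3):
    # perturbation-based attribution: a feature's importance is how much
    # the model output changes when that feature alone is nudged
    base = model(x)
    sal = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp = x.astype(float).copy()
        xp.flat[i] += eps
        sal.flat[i] = abs(model(xp) - base) / eps
    return sal

# toy stand-in 'model': a fixed weighted sum of EEG samples (hypothetical)
w = np.array([0.1, 0.0, 2.0, -0.5])
saliency = perturbation_saliency(lambda x: float(w @ x), np.ones(4))
```

For this linear toy, the saliency recovers the absolute weights exactly; real EEG models require many forward passes or occluding whole channels/windows instead of single samples.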

    F²AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns

    Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by well-designed perturbations. This could lead to disastrous results in critical applications such as self-driving cars, surveillance security, and medical diagnosis. At present, adversarial training is one of the most effective defenses against adversarial examples. However, traditional adversarial training makes it difficult to achieve a good trade-off between clean accuracy and robustness, since spurious features are still learned by DNNs. The intrinsic reason is that traditional adversarial training makes it difficult to fully learn core features from adversarial examples when adversarial noise and clean examples cannot be disentangled. In this paper, we disentangle adversarial examples into natural and perturbed patterns by bit-plane slicing. We assume the higher bit-planes represent natural patterns and the lower bit-planes represent perturbed patterns. We propose Feature-Focusing Adversarial Training (F²AT), which differs from previous work in that it enforces the model to focus on the core features from natural patterns and reduce the impact of spurious features from perturbed patterns. The experimental results demonstrate that F²AT outperforms state-of-the-art methods in clean accuracy and adversarial robustness.
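The bit-plane disentanglement step can be shown directly. This is a generic sketch; the split point `split_bit=4` is an assumption for illustration, not necessarily the paper's setting:

```python
import numpy as np

def bit_plane_split(img, split_bit=4):
    # higher bit-planes carry the natural pattern; small L-infinity
    # perturbations mostly live in the lower bit-planes
    mask = ((0xFF >> split_bit) << split_bit) & 0xFF  # e.g. 0xF0 for split_bit=4
    natural = img & np.uint8(mask)        # keep the top bit-planes
    perturbed = img & np.uint8(0xFF ^ mask)  # keep the bottom bit-planes
    return natural, perturbed

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # toy 8-bit image
natural, perturbed = bit_plane_split(img)
```

Because the two masks are bitwise complements, the split is lossless: OR-ing the parts recovers the original pixels.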

    The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing

    We present a unified probabilistic formulation for diffusion-based image editing, where a latent variable is edited in a task-specific manner and generally deviates from the corresponding marginal distribution induced by the original stochastic or ordinary differential equation (SDE or ODE). Instead, it defines a corresponding SDE or ODE for editing. In this formulation, we prove that the Kullback-Leibler divergence between the marginal distributions of the two SDEs gradually decreases, while that for the ODEs remains constant, as the time approaches zero, which shows the promise of SDE in image editing. Inspired by this, we provide SDE counterparts for widely used ODE baselines in various tasks including inpainting and image-to-image translation, where SDE shows a consistent and substantial improvement. Moreover, we propose SDE-Drag -- a simple yet effective method built upon the SDE formulation for point-based content dragging. We build a challenging benchmark (termed DragBench) with open-set natural, art, and AI-generated images for evaluation. A user study on DragBench indicates that SDE-Drag significantly outperforms our ODE baseline, existing diffusion-based methods, and the renowned DragGAN. Our results demonstrate the superiority and versatility of SDE in image editing and push the boundary of diffusion-based editing methods.
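The SDE-versus-ODE distinction can be made concrete for a variance-preserving (VP) diffusion using the standard reverse-time drift formulas. The score function and noise schedule below are toy stand-ins, not the paper's learned models:

```python
import numpy as np

def reverse_step(x, t, dt, score, beta, rng=None):
    # VP diffusion: forward drift f(x,t) = -0.5*beta(t)*x, g(t)^2 = beta(t).
    # rng=None -> deterministic probability-flow ODE step;
    # otherwise -> Euler-Maruyama step of the reverse-time SDE.
    g2 = beta(t)
    if rng is None:
        drift = -0.5 * g2 * x - 0.5 * g2 * score(x, t)  # probability-flow ODE
        return x + drift * dt
    drift = -0.5 * g2 * x - g2 * score(x, t)            # reverse-SDE drift
    noise = rng.normal(size=np.shape(x))
    return x + drift * dt + np.sqrt(g2 * abs(dt)) * noise
```

The only differences between the two samplers are the score coefficient (g² versus g²/2) and the injected Gaussian noise; that injected randomness is what lets the SDE sampler contract back toward the data manifold after an edit perturbs the latent.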

    EPIM: Efficient Processing-In-Memory Accelerators based on Epitome

    The exploration of Processing-In-Memory (PIM) accelerators has garnered significant attention within the research community. However, the utilization of large-scale neural networks on PIM accelerators encounters challenges due to constrained on-chip memory capacity. To tackle this issue, current works explore model compression algorithms to reduce the size of Convolutional Neural Networks (CNNs). Most of these algorithms either aim to represent neural operators with reduced-size parameters (e.g., quantization) or search for the best combinations of neural operators (e.g., neural architecture search). Designing neural operators to align with PIM accelerators' specifications is an area that warrants further study. In this paper, we introduce the epitome, a lightweight neural operator offering convolution-like functionality, to craft memory-efficient CNN operators for PIM accelerators (EPIM). On the software side, we evaluate epitomes' latency and energy on PIM accelerators and introduce a PIM-aware layer-wise design method to enhance their hardware efficiency. We apply epitome-aware quantization to further reduce the size of epitomes. On the hardware side, we modify the datapath of current PIM accelerators to accommodate epitomes and implement a feature-map reuse technique to reduce computation cost. Experimental results reveal that our 3-bit quantized EPIM-ResNet50 attains 71.59% top-1 accuracy on ImageNet while reducing crossbar area by a factor of 30.65. EPIM surpasses state-of-the-art pruning methods on PIM.
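One way to picture an epitome-style operator is a small shared parameter bank expanded into full layer weights. The tiling expansion below is a loose illustration of the weight-sharing idea, not the paper's actual sampling scheme:

```python
import numpy as np

def filters_from_epitome(epitome, out_shape):
    # derive a full conv weight tensor from a small shared parameter
    # bank by tiling; only the compact epitome needs on-chip storage
    reps = [-(-o // e) for o, e in zip(out_shape, epitome.shape)]  # ceil division
    tiled = np.tile(epitome, reps)
    return tiled[tuple(slice(0, o) for o in out_shape)]

bank = np.random.default_rng(0).normal(size=(8, 3, 3, 3))  # stored parameters
weights = filters_from_epitome(bank, (64, 3, 3, 3))        # expanded layer weights
```

Here 64 output filters are served by an 8-filter bank, an 8x parameter reduction before any quantization is applied.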

    Reverse logistics pricing strategy for a green supply chain: a view of customers’ environmental awareness

    The effectiveness of a reverse logistics strategy is contingent upon the successful execution of activities related to materials and product reuse. A green supply chain (GSC) in reverse logistics aims to minimize byproducts ending up in landfills. This paper considers a retailer responsible for recycling and a manufacturer responsible for remanufacturing. Customer environmental awareness (CEA) is operationalized as a customer word-of-mouth effect. We form three game-theoretic models for two different scenarios with different pricing strategies, i.e., a non-cooperative pricing scenario based on Stackelberg and Nash equilibria, and a joint pricing scenario within a cooperative game model. The paper suggests that stakeholders are better off making their pricing and manufacturing decisions in cooperation.
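The cooperation-beats-non-cooperation conclusion echoes the classic double-marginalization effect, which a toy linear-demand Stackelberg example reproduces (all numbers hypothetical, costs set to zero for brevity):

```python
# linear demand q = a - b * p; manufacturer leads, retailer follows
a, b = 100.0, 1.0

def stackelberg_profit():
    w = a / (2 * b)            # manufacturer's optimal wholesale price
    m = (a - b * w) / (2 * b)  # retailer's best-response margin
    q = a - b * (w + m)
    return w * q + m * q       # combined channel profit

def cooperative_profit():
    p = a / (2 * b)            # jointly chosen monopoly price
    return p * (a - b * p)
```

The sequential (Stackelberg) channel stacks two markups and sells less, so the cooperative joint-pricing channel earns strictly more total profit, matching the paper's qualitative conclusion.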

    T-S Fuzzy Model Based H∞

    This paper presents a double-loop controller for a 7-DoF automobile electrohydraulic active suspension via a T-S fuzzy modelling technique. The outer-loop controller employs a modified H-infinity feedback control based on a T-S fuzzy model to provide the actuation force needed to ensure better riding comfort and handling stability. The resulting optimization problem is transformed into a linear matrix inequality (LMI) problem associated with stability analysis, suspension stroke limits, and force constraints. Integrating these via the parallel distributed compensation method, the feedback gains are derived to render the suspension performance dependent on the perturbation size and improve the efficiency of active suspensions. Adaptive Robust Control (ARC) is then adopted in the inner-loop design to deal with uncertain nonlinearities and improve tracking accuracy. The validity of the improvements attained by this controller is demonstrated by comparison with conventional backstepping control and a passive suspension on a 7-DoF simulation example. It is shown that the T-S fuzzy model based controller can achieve favourable suspension performance and energy conservation under both mild and severe road inputs.
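The parallel distributed compensation step (blending local feedback gains with the same fuzzy memberships as the T-S model rules) can be sketched as follows; the two-rule memberships and gain matrices are hypothetical:

```python
import numpy as np

def pdc_control(x, z, memberships, gains):
    # Parallel Distributed Compensation: u = sum_i h_i(z) * F_i @ x,
    # where h_i are normalized rule firing strengths
    h = np.array([mu(z) for mu in memberships], dtype=float)
    h = h / h.sum()
    return sum(hi * (Fi @ x) for hi, Fi in zip(h, gains))

# two-rule example with hypothetical local state-feedback gains
memberships = [lambda z: max(0.0, 1 - abs(z)), lambda z: min(1.0, abs(z))]
gains = [np.array([[-1.0, -2.0]]), np.array([[-3.0, -4.0]])]
u = pdc_control(np.array([1.0, 0.5]), 0.25, memberships, gains)
```

In the actual design, each local gain F_i would come from solving the LMI conditions mentioned above rather than being hand-picked.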